MIT Unveils a First-of-Its-Kind Method, Inspired by Large Language Models, for Teaching Robots New Skills
This week, MIT showcased a new approach to training robots that targets a known weakness of imitation learning: small disruptions can derail it. Researchers noted that imitation learning tends to fail under changed conditions, such as different lighting, a new environment, or unexpected obstacles, because robots simply lack the data to adapt. To address this, the team drew on the large-scale data techniques behind models such as GPT-4, introducing a new architecture called the Heterogeneous Pretrained Transformer (HPT), which pools information from different sensors and different environments into a shared representation.
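To make the idea concrete, here is a minimal sketch of how heterogeneous sensor streams can be projected into one shared token space and processed by shared weights. This is an illustrative toy in NumPy, not MIT's implementation: the input names, dimensions, and the simple pooling step standing in for a transformer are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
TOKEN_DIM = 16  # shared latent token size (illustrative choice)

def make_stem(input_dim: int) -> np.ndarray:
    """Per-sensor 'stem': projects a raw sensor vector into the shared token space."""
    return rng.standard_normal((input_dim, TOKEN_DIM)) / np.sqrt(input_dim)

# Hypothetical robot with two very different sensor streams.
stems = {
    "camera_features": make_stem(64),  # e.g. pooled vision features
    "proprioception": make_stem(7),    # e.g. 7 joint angles
}

# Shared 'trunk': one set of weights reused regardless of sensor suite.
trunk_W = rng.standard_normal((TOKEN_DIM, TOKEN_DIM)) / np.sqrt(TOKEN_DIM)

# Task-specific 'head': maps the trunk output to an action vector.
head_W = rng.standard_normal((TOKEN_DIM, 4)) / np.sqrt(TOKEN_DIM)  # 4-DoF action, illustrative

def policy(observations: dict) -> np.ndarray:
    # 1. Tokenize each heterogeneous input with its own stem.
    tokens = [obs @ stems[name] for name, obs in observations.items()]
    # 2. Pool the tokens and pass them through the shared trunk
    #    (a stand-in for the transformer backbone).
    pooled = np.tanh(np.mean(tokens, axis=0) @ trunk_W)
    # 3. Decode an action with the task head.
    return pooled @ head_W

action = policy({
    "camera_features": rng.standard_normal(64),
    "proprioception": rng.standard_normal(7),
})
print(action.shape)  # (4,)
```

The key design point mirrored here is that only the stems differ per sensor setup; the trunk weights are shared, which is what lets diverse data sources contribute to one pretrained model.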